A Chicago street outreach worker, whose job it is to help stop gun violence, once told me that it felt like he and his colleagues were trying to build a shield on the battlefield. He wasn’t describing dodging bullets, but rather the immense pressure community violence intervention, or CVI, programs face to prove their worth through narrowly defined metrics and methods.
As gun violence devastates American cities, CVI efforts — which typically employ trusted community members to mediate conflicts and mentor at-risk individuals — have gained unprecedented funding. Chicago tripled its violence prevention budget between 2019 and 2021; Philadelphia committed $155 million for the 2022 fiscal year; the federal Bipartisan Safer Communities Act, which passed in 2022, includes $250 million for community violence interventions; and cities and states across the country were able to use portions of the American Rescue Plan Act for CVI programming.
But with this spotlight comes intense scrutiny. Policymakers and funders, desperate for solutions, are asking a deceptively difficult question: Do these programs work?
It’s a fair query, and a timely one: the future of federal funding and support for CVI efforts is at risk under the incoming Trump administration. We should absolutely evaluate the impact of violence prevention strategies. The problem, however, lies in how we’re defining “work” and what evidence we accept as proof. Right now, a dangerously myopic view of scientific rigor threatens to undermine the field of community violence intervention at the very moment we need it most.
This crisis stems from our misplaced faith in narrowly defined causal research methods, with a single method, the randomized controlled trial, or RCT, sitting atop an overly reified hierarchy of approaches. RCTs remain powerful tools. But their elevation to a sacred standard that should be applied to every scientific investigation ignores both the empirical realities of violence and the idea that other methods might be better suited to evaluate social policies like CVI programs.
Using experimental methods to study crime and violence gained favor in the 1980s, when fearmongering, more than science, often drove policy decisions; researchers sought refuge in medical research methods and their promise of clear cause-and-effect relationships. The logic was compelling: Randomly assign some people to receive an intervention while others don’t, then measure the difference in outcomes. RCTs rose to become the “gold standard,” with other experimental methods ranked closely behind and each rung below the RCT signaling lesser rigor. But what began as an advance in scientific precision has become a straitjacket, constraining how we evaluate community violence prevention.
The reality of gun violence is complex, as is the politics of violence prevention. Trying to force this complex reality into the framework of randomized trials isn’t just misguided; it fundamentally misunderstands both the nature of gun violence and the work required to prevent it.
Clinical-style experimentation in community violence work is often impossible or unethical to implement. Randomly assigning some high-risk individuals or communities to receive potentially life-saving interventions while withholding them from others raises serious ethical concerns, as does the distribution of resources to some historically marginalized communities but not others.
Meanwhile, gun violence itself defies randomization, clustering in specific places and social networks. That interconnected nature means true control groups are often impossible to maintain, since violence naturally spreads through communities and across neighborhood boundaries.
More importantly, leading scientists have long acknowledged that other rigorous methods may be equally effective at measuring the impact of social phenomena. And there is a range of statistical tools that can capture the complex and interrelated nature of gun violence.
This fixation on experimentation has real consequences. When CVI evaluations show mixed results — sometimes they work, sometimes they don’t — critics seize on this complexity as proof of failure. But we’re holding these programs to an impossible standard. Consider that, according to one estimate, less than 14 percent of all drug development programs eventually lead to approval, yet we don’t defund entire hospital systems when a new drug fails. And failed policing initiatives rarely lead to budget cuts. Why, then, do we demand perfection from CVI programs?
The fundamental flaw in our current approach to understanding CVIs is treating them as if they were pills we could simply prescribe to individuals. Effective interventions aren’t so cut-and-dried, and they must work somewhere between individual participants and entire neighborhoods. CVI outreach workers don’t just mediate gang disputes or counsel isolated individuals; they navigate complex webs of relationships, build trust, and reshape community norms, all while contending with a crowded hall of past traumas.
Moreover, the fixation on interrupting violence as the only meaningful outcome of CVI ignores vast swaths of what CVI workers actually do. Yes, mediating conflicts is crucial. But much of their time is spent on less dramatic but equally vital tasks that can lead to longer-term community safety: mentoring youth, connecting people to services, organizing community events, and slowly rebuilding social fabric torn apart by decades of disinvestment.
So how do we chart a better course? We need a new approach that combines scientific rigor with community-centered practices and captures the full scope of this complex work. First, we must expand our definition of valid evidence beyond the narrow language of clinical trials, and ensure that policymakers recognize the long-held truth in social science that numerous methods can unpack our most important social questions. This means embracing research that combines rigorous quantitative analysis with equally rigorous qualitative insights, while recognizing the validity of careful observational research and longitudinal studies that can capture change over time.
We also need to fundamentally rethink what we measure. Success in violence prevention isn’t just about counting fewer shootings — though that matters enormously. It’s about tracking shifts in social networks and community relationships, assessing improvements in trust among neighborhoods and civic participation, and evaluating multiple indicators of community safety that reflect the true impact of this work. Even as they interrupt gang disputes, CVI organizations are also encouraging residents to vote, hosting a wide range of neighborhood social events, opening food pantries, and helping with back-to-school activities.
Finally, we must center the expertise of those closest to the work. This means involving practitioners and those with lived experiences in research design and interpretation, incorporating local knowledge into evaluation frameworks, and learning from both the successes and challenges of implementation. The people doing this work every day have insights that no randomized trial could ever capture.
This isn’t about lowering standards. It’s about raising them. True scientific rigor lies in accurately capturing complex realities, not in forcing messy social phenomena into ill-fitting methodological boxes.
We face a pivotal moment. Gun violence continues to devastate communities across America. Community violence intervention offers a beacon of hope, but only if we evaluate it fairly, support its growth with patience and nuance, and ensure new and expanded funding streams independent of the federal government.
The lives of countless young people and their families depend on getting this right. We owe it to them to ask better questions and seek fuller answers in our quest to build safer communities.
Andrew V. Papachristos, Ph.D., is a professor of sociology and director of CORNERS: The Center for Neighborhood Engaged Research and Science at Northwestern University. He is a member of The Justice Collaboratory at Yale Law School.